88 research outputs found

    Small but important – Building metacommunicative interaction mechanisms into conversational agents


    Understanding How Well You Understood – Context-sensitive Interpretation of Multimodal User Feedback

    Buschmeier H, Kopp S. Understanding How Well You Understood – Context-sensitive Interpretation of Multimodal User Feedback. In: Proceedings of the 12th International Conference on Intelligent Virtual Agents. Santa Cruz, CA; 2012: 517–519.

    Attentive Speaking. From Listener Feedback to Interactive Adaptation

    Buschmeier H. Attentive Speaking. From Listener Feedback to Interactive Adaptation. Bielefeld: Universität Bielefeld; 2018.

    Dialogue is an interactive endeavour in which participants jointly pursue the goal of reaching understanding. Since participants enter the interaction with their individual conceptualisation of the world and their idiosyncratic way of using language, understanding cannot, in general, be reached by exchanging messages that are encoded when speaking and decoded when listening. Instead, speakers need to design their communicative acts in such a way that listeners are likely able to infer what is meant. Listeners, in turn, need to provide evidence of their understanding in such a way that speakers can infer whether their communicative acts were successful. This is often an interactive and iterative process in which speakers and listeners work towards understanding by jointly coordinating their communicative acts through feedback and adaptation. Taking part in this interactive process requires dialogue participants to have ‘interactional intelligence’. This conceptualisation of dialogue is rather uncommon in formal or technical approaches to dialogue modelling. This thesis argues that it may, nevertheless, be a promising research direction for these fields, because it de-emphasises raw language processing performance and focusses on fundamental interaction skills. Interactionally intelligent artificial conversational agents may thus be able to reach understanding with their interlocutors by drawing upon such competences. This will likely make them more robust, more understandable, more helpful, more effective, and more human-like.

    This thesis develops conceptual and computational models of interactional intelligence for artificial conversational agents that are limited to (1) the speaking role, and (2) evidence of understanding in the form of communicative listener feedback (short but expressive verbal/vocal signals, such as ‘okay’, ‘mhm’ and ‘huh’, head gestures, and gaze). This thesis argues that such ‘attentive speaker agents’ need to be able (1) to probabilistically reason about, infer, and represent their interlocutors’ listening-related mental states (e.g., their degree of understanding), based on their interlocutors’ feedback behaviour; (2) to interactively adapt their language and behaviour such that their interlocutors’ needs, derived from the attributed mental states, are taken into account; and (3) to decide when they need feedback from their interlocutors and how they can elicit it using behavioural cues. This thesis describes computational models for these three processes, their integration in an incremental behaviour generation architecture for embodied conversational agents, and a semi-autonomous interaction study in which the resulting attentive speaker agent is evaluated. The evaluation finds that the computational models of attentive speaking developed in this thesis enable conversational agents to interactively reach understanding with their human interlocutors (through feedback and adaptation) and that these interlocutors are willing to provide natural communicative listener feedback to such an attentive speaker agent. The thesis shows that computationally modelling interactional intelligence is generally feasible, and thereby raises many new research questions and engineering problems in the interdisciplinary fields of dialogue and artificial conversational agents.

    A dynamic minimal model of the listener for feedback-based dialogue coordination

    Buschmeier H, Kopp S. A dynamic minimal model of the listener for feedback-based dialogue coordination. In: DialWatt – SemDial 2014: Proceedings of the 18th Workshop on the Semantics and Pragmatics of Dialogue (SemDial). Edinburgh, UK; 2014: 17–25.

    Although the notion of grounding in dialogue is widely acknowledged, the exact nature of the representations of common ground and its specific role in language processing are topics of ongoing debate. Proposals range from rich, explicit representations of common ground in the minds of speakers (Clark, 1996) to implicit representations, or even none at all (Pickering and Garrod, 2004). We argue that a minimal model of mentalising that tracks the interlocutor's state in terms of general states of perception, understanding, acceptance and agreement, and is continuously updated based on communicative listener feedback, is a viable and practical concept for the purpose of building conversational agents. We present such a model based on a dynamic Bayesian network that takes listener feedback and dialogue context into account, and whose temporal dynamics are modelled with respect to discourse structure. The potential benefit of this approach is discussed with two applications: generation of feedback elicitation cues, and anticipatory adaptation.
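    The kind of minimal mentalising described in this abstract can be illustrated with a small sketch: a discrete belief over the listener's states (perception, understanding, acceptance, agreement) is updated by Bayes' rule when feedback arrives and drifts back toward a prior between signals, a stand-in for the temporal dynamics the paper ties to discourse structure. The state space, feedback signals, and all probability values below are illustrative assumptions, not the authors' actual model parameters.

    ```python
    # Minimal sketch of feedback-based listener-state tracking.
    # All likelihood values are assumed for illustration only.

    STATES = ["perception", "understanding", "acceptance", "agreement"]

    # Assumed likelihoods P(feedback signal | listener state).
    LIKELIHOOD = {
        "mhm":  {"perception": 0.20, "understanding": 0.40, "acceptance": 0.30, "agreement": 0.10},
        "okay": {"perception": 0.05, "understanding": 0.30, "acceptance": 0.40, "agreement": 0.25},
        "huh?": {"perception": 0.70, "understanding": 0.20, "acceptance": 0.07, "agreement": 0.03},
    }

    def update(belief, feedback):
        """One Bayesian update of the belief over listener states."""
        posterior = {s: belief[s] * LIKELIHOOD[feedback][s] for s in STATES}
        z = sum(posterior.values())
        return {s: p / z for s, p in posterior.items()}

    def decay(belief, prior, rate=0.2):
        """Drift back toward the prior between feedback signals
        (a crude stand-in for discourse-structured temporal dynamics)."""
        return {s: (1 - rate) * belief[s] + rate * prior[s] for s in STATES}

    prior = {s: 1 / len(STATES) for s in STATES}
    belief = update(prior, "huh?")
    # After a 'huh?', most probability mass sits on a perception problem.
    ```

    A full dynamic Bayesian network would also condition on dialogue context and link states across time slices; this sketch keeps only the update-and-decay core of the idea.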

    Probabilistic pragmatic inference of communicative feedback meaning

    Buschmeier H, Kopp S. Probabilistic pragmatic inference of communicative feedback meaning. In: Abstracts of the 16th International Pragmatics Conference. 2019: 480–481.

    Communicative feedback is an expression of addressees’ listening-related mental states that parallels and influences their dialogue partners’ speech production (Clark 1996) by expressing ‘basic communication functions’ (e.g., perception, understanding, acceptance; Allwood et al. 1992). When realised as pragmatic interjections (e.g., ‘mm’, ‘huh?’), feedback occurs in a large number of forms. Applying phonologic, morphologic, or syntactic operations results in a combinatorially growing space of feedback expressions. These can be further varied using nonverbal markers (prosody, gesture; Freigang et al. 2017), which add continuous dimensions to the feedback form-space. Humans exploit this richness in form to enrich feedback meaning with attitudinal or epistemic components and to express subtle differences on various dimensions (e.g., certainty, degree of understanding, ongoing cognitive processing). Although the mapping between the form of feedback and its meaning has some aspects that are conventionalised, feedback meaning is idiosyncratic and relies heavily on iconic properties and – as a purely interactional phenomenon – on its dialogue context. Because of this, we see communicative feedback as a ‘model phenomenon’ of language processing that allows for modelling the cognitive processes underlying pragmatic reasoning in language use without the need to model all of language. We present a computational model of feedback interpretation (Buschmeier 2018), which embodies a probabilistic approach to pragmatic inference (Goodman and Frank 2016) and conceptualises speakers’ feedback interpretation as attribution of listening-related mental states to their feedback-providing interlocutors.

    Given an addressee’s feedback and its dialogue context, the model attributes a second-order belief state to the addressee (a probability distribution over their listening-related mental states, such as perception, understanding, acceptance, etc.). The model is thus able (1) to represent and reason about a speaker’s degree of belief in the dimensions and grades of their listener’s listening-related mental states (e.g., there is a high probability that the listener's understanding is estimated to be low), and (2) to model the traditional semantic and pragmatic processes assumed to underlie the hierarchical relationship of feedback functions (Allwood et al. 1992, Bunt 2011), namely ‘upward completion’ (Clark 1996) and ‘upper-bound implicata’ generated by the cooperative principle (Horn 2004). We combined this model of feedback interpretation with an incrementally adaptive natural language generation model in an artificial conversational agent and evaluated it in a semi-autonomous Wizard-of-Oz study (Buschmeier 2018). Autonomously interpreting its human interlocutors’ multimodal feedback and adapting to their needs, this ‘attentive speaker agent’ communicated more efficiently than an agent that explicitly ensured participants’ understanding. Participants rated the agent more helpful and cooperative and found it to be able to understand their mental state of listening.

    * Allwood et al. (1992). On the semantics and pragmatics of linguistic feedback. https://doi.org/10.1093/jos/9.1.1
    * Bunt (2011). Multifunctionality in dialogue. https://doi.org/10.1016/j.csl.2010.04.006
    * Buschmeier (2018). _Attentive Speaking. From Listener Feedback to Interactive Adaptation_. PhD thesis, Bielefeld University. https://doi.org/10.4119/unibi/2918295
    * Clark (1996). _Using Language_. https://doi.org/10.1017/CBO9780511620539
    * Freigang et al. (2017). Pragmatic multimodality: Effects of nonverbal cues of focus and certainty in a virtual human. https://doi.org/10.1007/978-3-319-67401-8_16
    * Goodman & Frank (2016). Pragmatic language interpretation as probabilistic inference. https://doi.org/10.1016/j.tics.2016.08.00
    * Horn (2004). Implicature. https://doi.org/10.1002/9780470756959.ch1
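    The ‘upward completion’ idea invoked above — evidence for a higher feedback function (e.g., understanding) also counts as evidence for the functions below it (e.g., perception) — can be sketched as a simple monotonicity constraint on per-function probabilities. The hierarchy ordering and the numeric values are illustrative assumptions, not the paper's actual inference model, which is a full probabilistic pragmatic model.

    ```python
    # Illustrative sketch of 'upward completion' over the assumed
    # hierarchy of feedback functions (lowest to highest).

    HIERARCHY = ["perception", "understanding", "acceptance", "agreement"]

    def upward_complete(evidence):
        """Given P(positive) per feedback function, enforce that strong
        evidence for a higher function propagates down: each lower
        function is at least as probable as every function above it."""
        completed = {}
        running_max = 0.0
        # Walk from the top of the hierarchy down, carrying the maximum.
        for fn in reversed(HIERARCHY):
            running_max = max(running_max, evidence[fn])
            completed[fn] = running_max
        return completed

    evidence = {"perception": 0.5, "understanding": 0.9,
                "acceptance": 0.3, "agreement": 0.1}
    grounded = upward_complete(evidence)
    # Strong evidence for 'understanding' (0.9) lifts 'perception' to 0.9.
    ```

    The complementary ‘upper-bound implicata’ (positive feedback at level n pragmatically suggests the levels above n were not reached) would constrain the values in the opposite direction.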

    Should robots be polite? Expectations about politeness in human–robot interaction

    Interaction with artificial social agents is often designed based on models of human interaction and dialogue. While this is certainly useful for basic interaction mechanisms, it has been argued that social communication strategies and social language use, a “particularly human” ability, may not be appropriate and transferable to interaction with artificial conversational agents. In this paper, we present qualitative research exploring whether users expect artificial agents to use politeness—a fundamental mechanism of social communication—in language-based human–robot interaction. Based on semi-structured interviews, we found that humans mostly ascribe a functional, rule-based use of polite language to humanoid robots and do not expect them to apply the socially motivated politeness strategies that they expect in human interaction. This study (1) provides insights, from a user perspective, for designing social robots’ use of politeness, and (2) contributes to politeness research through the analysis of our participants’ perspectives on politeness.

    Conversational agents need to be ‘attentive speakers’ to receive conversational feedback from human interlocutors

    Buschmeier H, Kopp S. Conversational agents need to be ‘attentive speakers’ to receive conversational feedback from human interlocutors. Presented at the 5th European Symposium on Multimodal Communication, Bielefeld, Germany.

    Communicative listener feedback in human–agent interaction: Artificial speakers need to be attentive and adaptive

    Buschmeier H, Kopp S. Communicative listener feedback in human–agent interaction: Artificial speakers need to be attentive and adaptive. In: Proceedings of the 17th International Conference on Autonomous Agents and Multiagent Systems. 2018: 1213–1221.

    In human dialogue, listener feedback is a pervasive phenomenon that serves important functions in the coordination of the conversation, both in regulating its flow and in creating and ensuring understanding between interlocutors. This makes feedback an interesting mechanism for conversational human–agent interaction. In this paper we describe computational models for an ‘attentive speaker’ agent that is able to (1) interpret the feedback behaviour of its human interlocutors by probabilistically attributing listening-related mental states to them; (2) incrementally adapt its ongoing language and behaviour generation to their needs; and (3) elicit feedback from them when needed. We present a semi-autonomous interaction study in which we compare such an attentive speaker agent with agents that either do not adapt their behaviour to their listeners’ needs or employ highly explicit ways of ensuring understanding. The results show that human interlocutors interacting with the attentive speaker agent provided significantly more listener feedback, felt that the agent was attentive and adaptive to their feedback, ascribed to the agent a desire to be understood, and rated it as more helpful in resolving difficulties in their understanding.
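    The third capability listed in this abstract, eliciting feedback when needed, admits a simple illustration: if the agent's belief over the listener's state is too uncertain, it produces an elicitation cue (e.g., a pause plus gaze toward the listener). The entropy criterion, the threshold, and the state names below are illustrative assumptions, not the authors' actual decision model.

    ```python
    # Sketch of a feedback-elicitation decision based on uncertainty
    # about the listener's state. Threshold and numbers are illustrative.
    import math

    def entropy(belief):
        """Shannon entropy (in bits) of a discrete belief distribution."""
        return -sum(p * math.log2(p) for p in belief.values() if p > 0)

    def should_elicit_feedback(belief, threshold=1.5):
        """Elicit feedback (e.g., pause and gaze toward the listener)
        when the agent is too uncertain about the listener's state."""
        return entropy(belief) > threshold

    uncertain = {"perception": 0.25, "understanding": 0.25,
                 "acceptance": 0.25, "agreement": 0.25}  # 2.0 bits
    confident = {"perception": 0.02, "understanding": 0.90,
                 "acceptance": 0.05, "agreement": 0.03}

    # should_elicit_feedback(uncertain) → True
    # should_elicit_feedback(confident) → False
    ```

    In a full agent, this decision would also weigh the cost of interrupting the utterance in progress against the expected information gain from the elicited feedback.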

    A model for dynamic minimal mentalizing in dialogue

    Buschmeier H, Kopp S. A model for dynamic minimal mentalizing in dialogue. Presented at the 12th Biannual Conference of the German Cognitive Science Society (KogWis 2014), Tübingen, Germany.
